
    Rational Proofs with Multiple Provers

    Interactive proofs (IP) model a world where a verifier delegates computation to an untrustworthy prover, verifying the prover's claims before accepting them. IP protocols have applications in areas such as verifiable computation outsourcing, computation delegation, and cloud computing. In these applications, the verifier may pay the prover based on the quality of his work. Rational interactive proofs (RIP), introduced by Azar and Micali (2012), are interactive proofs with payments, in which the prover is rational rather than untrustworthy: he may lie, but only to increase his payment. Rational proofs leverage the prover's rationality to obtain simple and efficient protocols. Azar and Micali show that RIP = IP (= PSPACE). They leave open the question of whether multiple provers are more powerful than a single prover, for rational and classical proofs alike. In this paper, we introduce multi-prover rational interactive proofs (MRIP). Here, a verifier cross-checks the provers' answers against each other and pays them according to the messages exchanged. The provers are cooperative and maximize their total expected payment if and only if the verifier learns the correct answer to the problem. We further refine the model of MRIP to incorporate the utility gap: the loss in payment suffered by provers who mislead the verifier into the wrong answer. We define the classes of MRIP protocols with constant, noticeable, and negligible utility gaps, and give a tight characterization of all three. We show that under standard complexity-theoretic assumptions, MRIP is more powerful than both RIP and MIP, and that this holds even when the utility gap is required to be constant. Furthermore, the full power of each MRIP class can be achieved using only two provers and three rounds. (A preliminary version of this paper appeared at ITCS 2016; this is the full version, which contains new results.)
    Comment: Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, ACM, 2016.
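
    To make the payment mechanism concrete, here is a minimal sketch, assuming a proper scoring rule in the spirit of (but not copied from) Azar and Micali's single-prover construction; the scenario, names, and numbers are illustrative only. A rational prover asked to report the probability of some verifiable event maximizes its expected payment exactly by reporting truthfully:

        def brier_payment(reported_p: float, outcome: bool) -> float:
            """Pay via the Brier (quadratic) scoring rule. The rule is
            'proper': expected payment is uniquely maximized by reporting
            the true probability of the outcome."""
            y = 1.0 if outcome else 0.0
            return 1.0 - (reported_p - y) ** 2

        def expected_payment(reported_p: float, true_p: float) -> float:
            """Expected payment when the outcome is Bernoulli(true_p)."""
            return (true_p * brier_payment(reported_p, True)
                    + (1.0 - true_p) * brier_payment(reported_p, False))

        if __name__ == "__main__":
            true_p = 0.7  # hypothetical ground truth known to the prover
            for claim in (0.5, 0.7, 0.9):
                print(claim, round(expected_payment(claim, true_p), 4))
            # The truthful claim 0.7 earns 0.79; both lies earn only 0.75.

    The same lever, applied across multiple cross-checked provers, is what MRIP payments exploit.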

    Non-Cooperative Rational Interactive Proofs

    Interactive-proof games model the scenario where an honest party interacts with powerful but strategic provers to elicit from them the correct answer to a computational question. Interactive proofs are increasingly used as a framework to design protocols for computation outsourcing. Existing interactive-proof games largely fall into two categories: games of cooperation, such as multi-prover interactive proofs and cooperative rational proofs, where the provers work together as a team; and games of conflict, such as refereed games, where the provers directly compete with each other in a zero-sum game. Neither extreme truly captures the strategic nature of service providers in outsourcing applications, and how to design and analyze non-cooperative interactive proofs is an important open problem. In this paper, we introduce a mechanism-design approach to define a multi-prover interactive-proof model in which the provers are rational and non-cooperative: they act to maximize their expected utility given the others' strategies. We define a strong notion of backwards induction as our solution concept to analyze the resulting extensive-form game with imperfect information. We fully characterize the complexity of our proof system under different utility-gap guarantees. (At a high level, a utility gap of u means that the protocol is robust against provers that may not care about a utility loss of 1/u.) We show, for example, that the power of non-cooperative rational interactive proofs with a polynomial utility gap is exactly equal to the complexity class P^{NEXP}.
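
    To illustrate the utility-gap parenthetical, the following sketch (the function and its inputs are mine, not the paper's) computes the gap a protocol achieves, given the total expected payment of each strategy profile and whether that profile leads the verifier to the correct answer:

        def achieved_utility_gap(profiles):
            """profiles: (total_expected_payment, verifier_correct) pairs,
            one per strategy profile, with payments normalized to [0, 1].
            Returns u such that every misleading profile loses at least
            1/u relative to the best truthful profile, or None if some
            misleading profile does at least as well."""
            best_correct = max(p for p, ok in profiles if ok)
            best_wrong = max((p for p, ok in profiles if not ok), default=None)
            if best_wrong is None:
                return float("inf")  # no profile can mislead the verifier
            loss = best_correct - best_wrong
            return None if loss <= 0 else 1.0 / loss

        # Misleading strategies lose only 0.01 here, so the protocol has
        # a utility gap of u = 100: provers indifferent to a 1/100 loss
        # might still defect.
        print(achieved_utility_gap([(0.90, True), (0.89, False), (0.50, False)]))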

    Approximate Similarity Search Under Edit Distance Using Locality-Sensitive Hashing

    Edit-distance similarity search, also called approximate pattern matching, is a fundamental problem with widespread database applications. The goal is to preprocess n strings of length d so as to quickly answer queries q of the form: if there is a database string within edit distance r of q, return a database string within edit distance cr of q. Previous approaches to this problem either rely on very large (superconstant) approximation ratios c or on very small search radii r; outside of a narrow parameter range, these solutions are not competitive with trivially searching through all n strings. In this work we give a simple and easy-to-implement hash function that can quickly answer queries for a wide range of parameters. Specifically, our strategy can answer queries in time Õ(d · 3^r · n^{1/c}). The best known practical results require c ≫ r to achieve any correctness guarantee; meanwhile, the best known theoretical results are very involved and difficult to implement, and require query time that can be loosely bounded below by 24^r. Our results significantly broaden the range of parameters for which there exist nontrivial theoretical bounds, while retaining the practicality of a locality-sensitive hash function.
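
    The paper's hash function itself is not reproduced here; the sketch below shows only the standard LSH query framework that such a hash plugs into, with a textbook edit-distance check used to verify candidates (the class and parameter names are illustrative):

        from collections import defaultdict

        def levenshtein(a: str, b: str) -> int:
            """Classic O(|a||b|) dynamic program for edit distance
            (insertions, deletions, substitutions)."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        class LSHIndex:
            """(r, cr)-near-neighbor index. hash_family(seed) must return
            a function mapping a string to a bucket key such that strings
            within edit distance r collide noticeably more often than
            strings beyond distance c*r; the paper supplies such a hash."""

            def __init__(self, strings, hash_family, num_tables=10):
                self.hashes = [hash_family(seed) for seed in range(num_tables)]
                self.tables = []
                for h in self.hashes:
                    table = defaultdict(list)
                    for s in strings:
                        table[h(s)].append(s)
                    self.tables.append(table)

            def query(self, q, r, c):
                """Return a database string within distance c*r of q if
                one shares a bucket with q in any table."""
                for h, table in zip(self.hashes, self.tables):
                    for s in table.get(h(q), []):
                        if levenshtein(q, s) <= c * r:
                            return s
                return None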

    Adaptive MapReduce Similarity Joins

    Similarity joins are a fundamental database operation. Given data sets S and R, the goal of a similarity join is to find all points x in S and y in R with distance at most r. Recent research has investigated how locality-sensitive hashing (LSH) can be used for similarity joins, and in particular two recent lines of work have made exciting progress on LSH-based join performance. Hu, Tao, and Yi (PODS '17) investigated joins in a massively parallel setting, showing strong results that adapt to the size of the output. Meanwhile, Ahle, Aumüller, and Pagh (SODA '17) gave a sequential algorithm that adapts to the structure of the data, matching classic bounds in the worst case but improving them significantly on more structured data. We show that this adaptive strategy can be carried over to the parallel setting, combining the advantages of both approaches. In particular, we show that a simple modification to Hu et al.'s algorithm achieves bounds that depend on the density of points in the dataset as well as on the total size of the output. Our algorithm uses no extra parameters over other LSH approaches (in particular, its execution does not depend on the structure of the dataset), and is likely to be efficient in practice.
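
    For readers new to LSH joins, here is a minimal single-round sketch of the basic bucketing idea that all of the algorithms above refine; it is deliberately naive (one hash table, whereas the cited algorithms repeat the hashing and adapt the number of repetitions to the output size or data density), and the names are illustrative. In a MapReduce setting, the map phase emits (lsh(x), x) pairs and the reduce phase runs the per-bucket verification loop:

        from collections import defaultdict

        def lsh_join_sketch(S, R, lsh, dist, r):
            """Report pairs (x, y) in S x R with dist(x, y) <= r that
            collide under one locality-sensitive hash; real algorithms
            repeat with independent hashes to catch the pairs missed here."""
            buckets = defaultdict(lambda: ([], []))
            for x in S:
                buckets[lsh(x)][0].append(x)
            for y in R:
                buckets[lsh(y)][1].append(y)
            out = []
            for s_side, r_side in buckets.values():
                for x in s_side:
                    for y in r_side:
                        if dist(x, y) <= r:
                            out.append((x, y))
            return out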

    Online List Labeling with Predictions

    A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Θ(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in a practical use case.
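
    For intuition about what relabeling costs, here is a toy sketch of the classical mechanism the problem revolves around; it is far simpler than the paper's learning-augmented structure, and in a prediction-aware version the predicted final rank would steer where each item is first placed. All names are illustrative, and the toy assumes distinct keys and m > n:

        import bisect

        class ToyListLabeling:
            """Keys kept in sorted order in an array of m slots. An insert
            into a full gap evenly redistributes the smallest enclosing
            window with spare capacity; `relabels` counts moved keys."""

            def __init__(self, m: int = 32):
                self.slots = [None] * m
                self.relabels = 0

            def insert(self, key):
                assert any(s is None for s in self.slots), "toy needs m > n"
                occ = [i for i, k in enumerate(self.slots) if k is not None]
                keys = [self.slots[i] for i in occ]
                rank = bisect.bisect_left(keys, key)
                lo = occ[rank - 1] if rank > 0 else -1
                hi = occ[rank] if rank < len(occ) else len(self.slots)
                if hi - lo > 1:          # a free slot exists in the gap
                    self.slots[(lo + hi) // 2] = key
                    return
                # Gap is full: grow a window until it has spare capacity,
                # then spread its keys (plus the new one) evenly across it.
                def filled(a, b):
                    return sum(self.slots[i] is not None for i in range(a, b + 1))
                left, right = max(lo, 0), min(hi, len(self.slots) - 1)
                while right - left + 1 <= filled(left, right):
                    left = max(0, left - 1)
                    right = min(len(self.slots) - 1, right + 1)
                window = sorted([self.slots[i] for i in range(left, right + 1)
                                 if self.slots[i] is not None] + [key])
                old = {self.slots[i]: i for i in range(left, right + 1)
                       if self.slots[i] is not None}
                for i in range(left, right + 1):
                    self.slots[i] = None
                step = (right - left + 1) / len(window)
                for j, k in enumerate(window):
                    pos = left + int(j * step)
                    self.slots[pos] = k
                    if k in old and old[k] != pos:
                        self.relabels += 1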

    Telescoping Filter: A Practical Adaptive Filter

    Filters are small, fast, and approximate set membership data structures. They are often used to filter out expensive accesses to a remote set S for negative queries (that is, filtering out queries x ∉ S). Filters have one-sided errors: on a negative query, a filter may say "present" with a tunable false-positive probability of ε. Correctness is traded for space: filters use only log(1/ε) + O(1) bits per element. The false-positive guarantees of most filters, however, hold only for a single query. In particular, if x is a false positive, a subsequent query to x is a false positive with probability 1, not ε. With this in mind, recent work has introduced the notion of an adaptive filter. A filter is adaptive if each query is a false positive with probability ε, regardless of answers to previous queries. This requires "fixing" false positives as they occur. Adaptive filters not only provide strong false-positive guarantees in adversarial environments but also improve query performance on practical workloads by eliminating repeated false positives. Existing work on adaptive filters falls into two categories. On the one hand, there are practical filters, based on the cuckoo filter, that attempt to fix false positives heuristically without meeting the adaptivity guarantee. On the other hand, the broom filter is a very complex adaptive filter that meets the optimal theoretical bounds. In this paper, we bridge this gap by designing the telescoping adaptive filter (TAF), a practical, provably adaptive filter. We provide theoretical false-positive and space guarantees for our filter, along with empirical results where we compare its performance against state-of-the-art filters. We also implement the broom filter and compare it to the TAF. Our experiments show that theoretical adaptivity can lead to improved false-positive performance on practical inputs, and can be achieved while maintaining throughput that is similar to non-adaptive filters.
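
    The "fixing" step can be pictured with a deliberately crude sketch: keep one fingerprint per element and, when the remote set reveals a false positive, lengthen the colliding fingerprints until the collision disappears. This is not the TAF's construction (the TAF's telescoping encoding avoids storing keys locally and preserves the space bound); everything below is illustrative:

        import hashlib

        def fingerprint(x: str, nbits: int) -> int:
            """First nbits of a 64-bit hash of x."""
            h = int.from_bytes(
                hashlib.blake2b(x.encode(), digest_size=8).digest(), "big")
            return h >> (64 - nbits)

        class ToyAdaptiveFilter:
            def __init__(self, base_bits: int = 8):
                self.base_bits = base_bits
                self.entries = []   # mutable [nbits, fp, key] triples

            def insert(self, x: str) -> None:
                self.entries.append(
                    [self.base_bits, fingerprint(x, self.base_bits), x])

            def query(self, q: str) -> bool:
                return any(fp == fingerprint(q, nb)
                           for nb, fp, _ in self.entries)

            def report_false_positive(self, q: str) -> None:
                """Called once the remote set confirms q is absent: extend
                each colliding fingerprint so the same query cannot be a
                false positive again."""
                for e in self.entries:
                    while e[0] < 64 and e[1] == fingerprint(q, e[0]):
                        e[0] += 1
                        e[1] = fingerprint(e[2], e[0])

    After report_false_positive(q), a repeated query to q no longer collides, which is the adaptivity property in miniature.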

    Resource Optimization for Program Committee Members: A Subreview Article

    This paper formalizes a resource-allocation problem that is all too familiar to the seasoned program-committee member. For each submission j that the PC member has the honor of reviewing, there is a choice. The PC member can spend the time to review submission j in detail on his/her own, at a cost of C_j. Alternatively, the PC member can spend the time to identify and contact peers, hoping to recruit them as subreviewers, at a cost of 1 per subreviewer. These potential subreviewers have a certain probability of rejecting each review request, and this probability increases as time goes on. Once the PC member runs out of time or unasked experts, he/she is forced to review the paper without outside assistance. This paper gives optimal solutions to several variations of the scheduling-reviewers problem. Most of the solutions from this paper are based on an iterated-log function of C_j: in particular, with k rounds, the optimal solution sends the k-iterated log of C_j requests in the first round, the (k-1)-iterated log in the second round, and so forth (a worked schedule appears below). One of the contributions of this paper is solving this problem exactly, even when rejection probabilities may increase. Naturally, PC members must make an integral number of subreview requests. This paper gives, as an intermediate result, a linear-time algorithm to transform the artificial problem, in which one can send fractional requests, into the less-artificial problem, in which one sends an integral number of requests. Finally, this paper considers the case where the PC member knows nothing about the probability that a potential subreviewer agrees to review the paper. This paper gives an approximation algorithm for this case, whose bounds improve as the number of rounds increases.
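
    A minimal sketch of that round schedule, under my reading of the strategy (the clamping at 2 and the rounding up are my choices, made only so the toy stays well defined):

        import math

        def iterated_log(x: float, k: int) -> float:
            """Apply log2 k times, clamping at 2 so the value stays defined."""
            for _ in range(k):
                x = math.log2(max(x, 2.0))
            return x

        def request_schedule(C: float, k: int) -> list:
            """With k rounds, request about the k-iterated log of C
            subreviews first, then the (k-1)-iterated log, and so on."""
            return [math.ceil(iterated_log(C, rounds))
                    for rounds in range(k, 0, -1)]

        print(request_schedule(C=2**16, k=3))   # -> [2, 4, 16]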

    An evaluation of the effectiveness of the crew resource management programme in naval aviation

    The US Navy's Crew Resource Management (CRM) training programme has not been evaluated within the last decade. Reactions were evaluated by analysing 51,570 responses to an item pertaining to CRM that is part of a safety climate survey. A total of 172 responses were obtained on a knowledge test. The attitudes of 553 naval aviators were assessed using an attitudes questionnaire. The CRM mishap rate from 1997 until 2007 was evaluated. It was found that naval aviators appear to think that CRM training is useful, are generally knowledgeable of, and display positive attitudes towards, the concepts addressed in the training. However, there is a lack of evidence to support the view that CRM training is having an effect on the mishap rate. As the next generation of highly automated aircraft becomes part of naval aviation, there is a need to ensure that CRM training evolves to meet this new challenge.

    The Mansion of Peace

    Recitative and song. The recitative begins 'Soft zephyr on thy balmy wing'; the song begins 'A rose, a rose from her bosom has strayed'. In F major. Transcribed from sheet music downloaded from the Lester Levy Collection at Johns Hopkins University. The song is no. 3.36 in the manuscript book in Jane Austen's hand described in Gammie and McCulloch's catalogue 'Jane Austen's Music', held by the Jane Austen's House Museum, Chawton.